Use AutoAI with Watson Studio project ibm-watsonx-ai¶

This notebook contains the steps and code to demonstrate support for AutoAI experiments in the watsonx.ai service inside Watson Studio projects. It introduces commands for data retrieval, experiment training, and scoring.

Some familiarity with Python is helpful. This notebook uses Python 3.12.

Learning goals¶

The learning goals of this notebook are:

  • Work with watsonx.ai experiments to train an AutoAI model using a Watson Studio project.
  • Create online and batch deployments and score the trained model.

Contents¶

This notebook contains the following parts:

  1. Setup
  2. Optimizer definition
  3. Experiment Run
  4. Deploy and Score
  5. Clean up
  6. Summary and next steps

1. Set up the environment¶

Before you use the sample code in this notebook, you must perform the following setup tasks:

  • Contact your Cloud Pak for Data administrator and ask them for your account credentials

Install dependencies¶

Note: ibm-watsonx-ai documentation can be found here.

In [ ]:
%pip install -U wget | tail -n 1
%pip install -U nbformat | tail -n 1
%pip install -U autoai-libs | tail -n 1
%pip install -U ibm-watsonx-ai | tail -n 1
Successfully installed wget-3.2
Successfully installed fastjsonschema-2.21.1 nbformat-5.10.4
Successfully installed autoai-libs-3.0.3
Successfully installed ibm-watsonx-ai-1.3.20

Define credentials¶

Authenticate the watsonx.ai Runtime service on IBM Cloud Pak for Data. You need to provide the admin's username and the platform URL.

In [2]:
username = "PASTE YOUR USERNAME HERE"
url = "PASTE THE PLATFORM URL HERE"

Use the admin's api_key to authenticate watsonx.ai Runtime services:

In [ ]:
import getpass
from ibm_watsonx_ai import Credentials

credentials = Credentials(
    username=username,
    api_key=getpass.getpass("Enter your watsonx.ai API key and hit enter: "),
    url=url,
    instance_id="openshift",
    version="5.2",
)

Alternatively you can use the admin's password:

In [3]:
import getpass
from ibm_watsonx_ai import Credentials

if "credentials" not in locals() or not credentials.api_key:
    credentials = Credentials(
        username=username,
        password=getpass.getpass("Enter your watsonx.ai password and hit enter: "),
        url=url,
        instance_id="openshift",
        version="5.2",
    )
Enter your watsonx.ai password and hit enter:  ········

Create APIClient instance¶

In [4]:
from ibm_watsonx_ai import APIClient

client = APIClient(credentials)

Working with spaces¶

First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use {PLATFORM_URL}/ml-runtime/spaces?context=icp4data to create one.

  • Click New Deployment Space
  • Create an empty space
  • Go to space Settings tab
  • Copy space_id and paste it below

Tip: You can also use SDK to prepare the space for your work. More information can be found here.

Action: Assign space ID below

In [5]:
space_id = "PASTE YOUR SPACE ID HERE"

You can use the list method to print all existing spaces.

In [ ]:
client.spaces.list(limit=10)

Working with projects¶

First of all, you need to create a project that will be used for your work. If you do not have a project already created, follow the steps below.

  • Open IBM Cloud Pak main page
  • Click all projects
  • Create an empty project
  • Copy project_id from url and paste it below

Action: Assign project ID below

In [6]:
import os

try:
    project_id = os.environ["PROJECT_ID"]
except KeyError:
    project_id = input("Please enter your project_id (hit enter): ")

To be able to interact with all resources available in watsonx.ai, you need to set the project that you will be using.

In [7]:
client.set.default_project(project_id)
Out[7]:
'SUCCESS'

2. Optimizer definition¶

Training data connection¶

Define the connection information to the training data CSV file. This example uses the German Credit Risk dataset.

The dataset can be downloaded from here.

In [8]:
import wget

filename = "german_credit_data_biased_training.csv"
base_url = "https://raw.githubusercontent.com/IBM/watsonx-ai-samples/master/cpd5.2/data/credit_risk/"

if not os.path.isfile(filename):
    wget.download(base_url + filename)
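For reference, the training file is a plain CSV with a Risk target column. The sketch below loads a synthetic two-row sample with the same layout (illustrative values only; the real file has 20 feature columns plus "Risk"):

```python
import io

import pandas as pd

# Synthetic two-row sample mimicking the German Credit Risk CSV layout
# (illustrative values only, not the real dataset).
sample_csv = io.StringIO(
    "CheckingStatus,LoanDuration,LoanAmount,Risk\n"
    "0_to_200,31.0,1889.0,No Risk\n"
    "less_0,18.0,462.0,Risk\n"
)

df = pd.read_csv(sample_csv)

# The target column holds the two class labels the experiment will predict.
print(sorted(df["Risk"].unique()))  # ['No Risk', 'Risk']
```

With the real file you would call `pd.read_csv(filename)` instead and see 20 feature columns alongside the target.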
In [9]:
asset_details = client.data_assets.create(
    "german_credit_data_biased_training", filename
)
asset_details
Creating data asset...
SUCCESS
Out[9]:
{'metadata': {'project_id': '9cf5ee36-da98-4856-be9f-a04df0d43f7b',
  'sandbox_id': '9cf5ee36-da98-4856-be9f-a04df0d43f7b',
  'usage': {'last_updated_at': '2025-05-22T07:01:16Z',
   'last_updater_id': '1000331001',
   'last_update_time': 1747897276642,
   'last_accessed_at': '2025-05-22T07:01:16Z',
   'last_access_time': 1747897276642,
   'last_accessor_id': '1000331001',
   'access_count': 0},
  'rov': {'mode': 0,
   'collaborator_ids': {},
   'member_roles': {'1000331001': {'user_iam_id': '1000331001',
     'roles': ['OWNER']}}},
  'is_linked_with_sub_container': False,
  'name': 'german_credit_data_biased_training',
  'description': '',
  'asset_type': 'data_asset',
  'origin_country': 'us',
  'resource_key': 'german_credit_data_biased_training',
  'rating': 0.0,
  'total_ratings': 0,
  'catalog_id': '262095aa-b785-431f-94fa-06d0d40855c5',
  'created': 1747897276642,
  'created_at': '2025-05-22T07:01:16Z',
  'owner_id': '1000331001',
  'size': 0,
  'version': 2.0,
  'asset_state': 'available',
  'asset_attributes': ['data_asset'],
  'asset_id': '48ccfa1c-bdaf-4bcf-b7ce-fe16fb650e5e',
  'asset_category': 'USER',
  'creator_id': '1000331001',
  'is_branched': True,
  'guid': '48ccfa1c-bdaf-4bcf-b7ce-fe16fb650e5e',
  'href': '/v2/assets/48ccfa1c-bdaf-4bcf-b7ce-fe16fb650e5e?project_id=9cf5ee36-da98-4856-be9f-a04df0d43f7b',
  'last_updated_at': '2025-05-22T07:01:16Z'},
 'entity': {'data_asset': {'mime_type': 'text/csv'}}}
In [10]:
client.data_assets.get_id(asset_details)
Out[10]:
'48ccfa1c-bdaf-4bcf-b7ce-fe16fb650e5e'
In [11]:
from ibm_watsonx_ai.helpers import DataConnection


credit_risk_conn = DataConnection(
    data_asset_id=client.data_assets.get_id(asset_details)
)

training_data_reference = [credit_risk_conn]

Optimizer configuration¶

Provide the input information for AutoAI optimizer:

  • name - experiment name
  • prediction_type - type of the problem
  • prediction_column - target column name
  • scoring - optimization metric
In [12]:
from ibm_watsonx_ai.experiment import AutoAI

experiment = AutoAI(credentials, project_id)

pipeline_optimizer = experiment.optimizer(
    name="Credit Risk Prediction - AutoAI",
    desc="Sample notebook",
    prediction_type=AutoAI.PredictionType.BINARY,
    prediction_column="Risk",
    scoring=AutoAI.Metrics.ROC_AUC_SCORE,
)

Configuration parameters can be retrieved via get_params().

In [13]:
pipeline_optimizer.get_params()
Out[13]:
{'name': 'Credit Risk Prediction - AutoAI',
 'desc': 'Sample notebook',
 'prediction_type': 'binary',
 'prediction_column': 'Risk',
 'prediction_columns': None,
 'timestamp_column_name': None,
 'scoring': 'roc_auc',
 'holdout_size': None,
 'max_num_daub_ensembles': None,
 't_shirt_size': 'm',
 'train_sample_rows_test_size': None,
 'include_only_estimators': None,
 'include_batched_ensemble_estimators': None,
 'backtest_num': None,
 'lookback_window': None,
 'forecast_window': None,
 'backtest_gap_length': None,
 'cognito_transform_names': None,
 'csv_separator': ',',
 'excel_sheet': None,
 'encoding': 'utf-8',
 'positive_label': None,
 'drop_duplicates': True,
 'outliers_columns': None,
 'text_processing': None,
 'word2vec_feature_number': None,
 'daub_give_priority_to_runtime': None,
 'text_columns_names': None,
 'sampling_type': None,
 'sample_size_limit': None,
 'sample_rows_limit': None,
 'sample_percentage_limit': None,
 'number_of_batch_rows': None,
 'n_parallel_data_connections': None,
 'test_data_csv_separator': ',',
 'test_data_excel_sheet': None,
 'test_data_encoding': 'utf-8',
 'categorical_imputation_strategy': None,
 'numerical_imputation_strategy': None,
 'numerical_imputation_value': None,
 'imputation_threshold': None,
 'retrain_on_holdout': True,
 'feature_columns': None,
 'pipeline_types': None,
 'supporting_features_at_forecast': None,
 'numerical_columns': None,
 'categorical_columns': None,
 'confidence_level': None,
 'incremental_learning': None,
 'early_stop_enabled': None,
 'early_stop_window_size': None,
 'time_ordered_data': None,
 'feature_selector_mode': None,
 'run_id': None}
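Most entries in the dictionary above are None, meaning AutoAI falls back to its defaults for them. A small helper like the one below (a sketch over a trimmed copy of the dictionary, not a library call) makes it easy to see only the options that were set explicitly:

```python
# Trimmed copy of the get_params() output shown above; most keys default to None.
params = {
    "name": "Credit Risk Prediction - AutoAI",
    "desc": "Sample notebook",
    "prediction_type": "binary",
    "prediction_column": "Risk",
    "scoring": "roc_auc",
    "holdout_size": None,
    "max_num_daub_ensembles": None,
    "t_shirt_size": "m",
    "drop_duplicates": True,
    "retrain_on_holdout": True,
}

# Keep only the options with an explicit (non-default) value.
set_params = {k: v for k, v in params.items() if v is not None}
print(sorted(set_params))
```

The same comprehension works on the full dictionary returned by `pipeline_optimizer.get_params()`.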

3. Experiment run¶

Call the fit() method to trigger the AutoAI experiment. You can run it either in interactive mode (synchronous job, background_mode=False) or in background mode (asynchronous job, background_mode=True).

In [14]:
run_details = pipeline_optimizer.fit(
    training_data_reference=training_data_reference, background_mode=False
)
Training job f14dfbfd-efc9-4e05-9922-3b6daaac4e25 completed: 100%|████████| [02:35<00:00,  1.55s/it]

You can use the get_run_status() method to monitor AutoAI jobs in background mode.

In [15]:
pipeline_optimizer.get_run_status()
Out[15]:
'completed'

3.1 Pipelines comparison¶

You can list trained pipelines and evaluation metrics information in the form of a Pandas DataFrame by calling the summary() method. You can use the DataFrame to compare all discovered pipelines and select the one you like for further testing.

In [16]:
summary = pipeline_optimizer.summary()
summary
Out[16]:
Enhancements Estimator training_roc_auc_(optimized) holdout_average_precision holdout_log_loss training_accuracy holdout_roc_auc training_balanced_accuracy training_f1 holdout_precision training_average_precision training_log_loss holdout_recall training_precision holdout_accuracy holdout_balanced_accuracy training_recall holdout_f1
Pipeline Name
Pipeline_2 HPO XGBClassifier 0.853281 0.479185 0.425331 0.800358 0.832534 0.748267 0.857836 0.816216 0.916755 0.428845 0.909639 0.814358 0.803607 0.751226 0.906382 0.860399
Pipeline_1 XGBClassifier 0.848451 0.466398 0.356549 0.796120 0.829125 0.749861 0.852969 0.848066 0.912986 0.439419 0.924699 0.818963 0.839679 0.797679 0.890274 0.884726
Pipeline_6 SnapBoostingMachineClassifier 0.850092 0.468067 0.403035 0.755522 0.825103 0.745869 0.808007 0.893333 0.915222 0.457473 0.807229 0.844414 0.807615 0.807806 0.775171 0.848101
Pipeline_3 HPO, FE XGBClassifier 0.852998 0.480987 0.433601 0.799690 0.823101 0.744796 0.858098 0.807487 0.915761 0.428990 0.909639 0.810770 0.795591 0.739250 0.911415 0.855524
Pipeline_7 HPO SnapBoostingMachineClassifier 0.851431 0.477125 0.456636 0.749054 0.822352 0.747928 0.798939 0.883803 0.916453 0.466879 0.756024 0.853730 0.771543 0.779210 0.751345 0.814935
Pipeline_4 HPO, FE, HPO XGBClassifier 0.853515 0.483860 0.444985 0.800582 0.820594 0.743488 0.859371 0.796296 0.916422 0.429937 0.906627 0.808763 0.783567 0.722774 0.916783 0.847887
Pipeline_5 HPO, FE, HPO, Ensemble BatchedTreeEnsembleClassifier(XGBClassifier) 0.853515 0.483860 0.444985 0.800582 0.820594 0.743488 0.859371 0.796296 0.916422 0.429937 0.906627 0.808763 0.783567 0.722774 0.916783 0.847887
Pipeline_9 HPO, FE, HPO SnapBoostingMachineClassifier 0.854561 0.471638 0.424094 0.762884 0.819223 0.753056 0.814332 0.882943 0.917152 0.455770 0.795181 0.848812 0.793587 0.792800 0.782893 0.836767
Pipeline_10 HPO, FE, HPO, Ensemble BatchedTreeEnsembleClassifier(SnapBoostingMach... 0.854561 0.471638 0.424094 0.762884 0.819223 0.753056 0.814332 0.882943 0.917152 0.455770 0.795181 0.848812 0.793587 0.792800 0.782893 0.836767
Pipeline_8 HPO, FE SnapBoostingMachineClassifier 0.854432 0.475277 0.449666 0.757978 0.818060 0.752334 0.808503 0.882353 0.917388 0.461020 0.768072 0.852192 0.777555 0.782240 0.769471 0.821256
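The summary() result is an ordinary pandas DataFrame, so you can rank and slice it like any other frame. The sketch below uses a small synthetic frame that mimics the structure above (the numbers are illustrative, not the real experiment results):

```python
import pandas as pd

# Synthetic stand-in for pipeline_optimizer.summary(); values are illustrative.
summary = pd.DataFrame(
    {
        "Enhancements": ["HPO", "", "HPO, FE"],
        "Estimator": ["XGBClassifier", "XGBClassifier", "XGBClassifier"],
        "holdout_roc_auc": [0.832, 0.829, 0.823],
    },
    index=pd.Index(["Pipeline_2", "Pipeline_1", "Pipeline_3"], name="Pipeline Name"),
)

# Rank pipelines by the holdout metric and pick the top one.
ranked = summary.sort_values("holdout_roc_auc", ascending=False)
best_pipeline = ranked.index[0]
print(best_pipeline)  # Pipeline_2
```

Applied to the real summary() frame, the same two lines give you the pipeline name to pass to the deployment step below.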

You can visualize the scoring metric calculated on a holdout data set.

In [17]:
import pandas as pd

pd.options.plotting.backend = "plotly"

summary.holdout_roc_auc.plot()

4. Deploy and Score¶

In this section you will learn how to deploy the trained model from your project to a specified deployment space and score it, both as a web service and as a batch deployment.

Webservice deployment creation¶

In [18]:
from ibm_watsonx_ai.deployment import WebService

service = WebService(
    source_instance_credentials=credentials,
    source_project_id=project_id,
    target_instance_credentials=credentials,
    target_space_id=space_id,
)

service.create(
    experiment_run_id=run_details["metadata"]["id"],
    model="Pipeline_1",
    deployment_name="Credit Risk Deployment AutoAI",
)
Preparing an AutoAI Deployment...
Published model uid: 1061e089-d4c3-4cf1-b71d-523b30da2562
Deploying model 1061e089-d4c3-4cf1-b71d-523b30da2562 using V4 client.


######################################################################################

Synchronous deployment creation for id: '1061e089-d4c3-4cf1-b71d-523b30da2562' started

######################################################################################


initializing
Note: online_url is deprecated and will be removed in a future release. Use serving_urls instead.
.......
ready


-----------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_id='39b1ac37-f47f-480c-a103-d0d3be5b68e2'
-----------------------------------------------------------------------------------------------


The deployment object can be printed to show basic information:

In [ ]:
print(service)

To show all available information about the deployment, use the .get_params() method:

In [ ]:
service.get_params()

Scoring of webservice¶

You can make a scoring request by calling score() on the deployed pipeline.

In [21]:
train_df = pipeline_optimizer.get_data_connections()[0].read()

train_X = train_df.drop(["Risk"], axis=1)
train_y = train_df.Risk.values
In [22]:
predictions = service.score(payload=train_X.iloc[:10])
predictions
Out[22]:
{'predictions': [{'fields': ['prediction', 'probability'],
   'values': [['No Risk', [0.9059737920761108, 0.09402620792388916]],
    ['No Risk', [0.9039297699928284, 0.09607024490833282]],
    ['No Risk', [0.8551719188690186, 0.14482809603214264]],
    ['No Risk', [0.7936931848526001, 0.2063068449497223]],
    ['Risk', [0.11071383953094482, 0.8892861604690552]],
    ['Risk', [0.035136282444000244, 0.9648637175559998]],
    ['No Risk', [0.8070950508117676, 0.19290491938591003]],
    ['No Risk', [0.821358323097229, 0.17864170670509338]],
    ['No Risk', [0.9476691484451294, 0.05233084037899971]],
    ['Risk', [0.01895737648010254, 0.9810426235198975]]]}]}
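The response is a plain dictionary of fields and row values. For instance, you can unpack it into predicted labels and class probabilities as sketched below (run here on a trimmed copy of the payload above):

```python
# Trimmed copy of the scoring response shown above.
predictions = {
    "predictions": [
        {
            "fields": ["prediction", "probability"],
            "values": [
                ["No Risk", [0.906, 0.094]],
                ["Risk", [0.111, 0.889]],
            ],
        }
    ]
}

fields = predictions["predictions"][0]["fields"]
values = predictions["predictions"][0]["values"]

# Split each row into the predicted label and its class probabilities.
labels = [row[fields.index("prediction")] for row in values]
probabilities = [row[fields.index("probability")] for row in values]
print(labels)  # ['No Risk', 'Risk']
```

Indexing through `fields` rather than hard-coding positions keeps the code correct even if the service reorders the output columns.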

If you want to work with the web service in an external Python application, you can retrieve the service object as follows:

  • Initialize the service with service = WebService(credentials)
  • Get the deployment_id with the service.list() method
  • Get the web service object with the service.get('deployment_id') method

After that you can call the service.score() method.

Deleting deployment¶

You can delete the existing deployment by calling the service.delete() command. To list the existing web services you can use service.list().

Batch deployment creation¶

A batch deployment either processes inline input data and returns predictions in the scoring details, or reads input from a data asset and writes the output to a file.

In [23]:
batch_payload_df = train_df.drop(["Risk"], axis=1)[:5]
batch_payload_df
Out[23]:
CheckingStatus LoanDuration CreditHistory LoanPurpose LoanAmount ExistingSavings EmploymentDuration InstallmentPercent Sex OthersOnLoan CurrentResidenceDuration OwnsProperty Age InstallmentPlans Housing ExistingCreditsCount Job Dependents Telephone ForeignWorker
0 0_to_200 31.0 credits_paid_to_date other 1889.0 100_to_500 less_1 3.0 female none 3.0 savings_insurance 32.0 none own 1.0 skilled 1.0 none yes
1 less_0 18.0 credits_paid_to_date car_new 462.0 less_100 1_to_4 2.0 female none 2.0 savings_insurance 37.0 stores own 2.0 skilled 1.0 none yes
2 less_0 15.0 prior_payments_delayed furniture 250.0 less_100 1_to_4 2.0 male none 3.0 real_estate 28.0 none own 2.0 skilled 1.0 yes no
3 0_to_200 28.0 credits_paid_to_date retraining 3693.0 less_100 greater_7 3.0 male none 2.0 savings_insurance 32.0 none own 1.0 skilled 1.0 none yes
4 no_checking 28.0 prior_payments_delayed education 6235.0 500_to_1000 greater_7 3.0 male none 3.0 unknown 57.0 none own 2.0 skilled 1.0 none yes

Create a batch deployment for Pipeline_2 from the AutoAI experiment, referenced by its run_id.

In [24]:
from ibm_watsonx_ai.deployment import Batch

service_batch = Batch(
    source_wml_credentials=credentials,
    source_project_id=project_id,
    target_wml_credentials=credentials,
    target_space_id=space_id,
)
service_batch.create(
    experiment_run_id=run_details["metadata"]["id"],
    model="Pipeline_2",
    deployment_name="Credit Risk Batch Deployment AutoAI",
)
Preparing an AutoAI Deployment...
Published model uid: e1d9fd9a-cb71-4574-a21d-51babcd43b84
Deploying model e1d9fd9a-cb71-4574-a21d-51babcd43b84 using V4 client.


######################################################################################

Synchronous deployment creation for id: 'e1d9fd9a-cb71-4574-a21d-51babcd43b84' started

######################################################################################


ready.


-----------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_id='d8d42b25-c36e-4748-99ff-aa2efbc090e9'
-----------------------------------------------------------------------------------------------


Score the batch deployment with an inline payload as a pandas DataFrame¶

In [25]:
scoring_params = service_batch.run_job(payload=batch_payload_df, background_mode=False)

##########################################################################

Synchronous scoring for id: 'b71888ba-ad6e-4ee6-870c-9e84a6c265e6' started

##########################################################################


queued...
completed
Scoring job  'b71888ba-ad6e-4ee6-870c-9e84a6c265e6' finished successfully.
In [26]:
scoring_params["entity"]["scoring"].get("predictions")
Out[26]:
[{'fields': ['prediction', 'probability'],
  'values': [['No Risk', [0.7898226380348206, 0.21017737686634064]],
   ['No Risk', [0.8523699045181274, 0.14763011038303375]],
   ['No Risk', [0.9065538644790649, 0.09344614297151566]],
   ['No Risk', [0.7669816017150879, 0.2330184131860733]],
   ['Risk', [0.27467256784439087, 0.7253274321556091]]]}]
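The batch response has the same fields/values shape as the web service response, so it flattens naturally into a DataFrame for further analysis. A sketch using a trimmed copy of the payload above:

```python
import pandas as pd

# Trimmed copy of the batch scoring predictions shown above.
batch_predictions = [
    {
        "fields": ["prediction", "probability"],
        "values": [
            ["No Risk", [0.79, 0.21]],
            ["Risk", [0.27, 0.73]],
        ],
    }
]

# Turn the fields/values payload into a tabular frame.
result = pd.DataFrame(
    batch_predictions[0]["values"], columns=batch_predictions[0]["fields"]
)
print(result["prediction"].tolist())  # ['No Risk', 'Risk']
```

The same two lines applied to `scoring_params["entity"]["scoring"].get("predictions")` give you a frame you can join back onto `batch_payload_df`.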

Deleting deployment¶

You can delete the existing deployment by calling the service_batch.delete() command. To list the existing:

  • batch services you can use service_batch.list(),
  • scoring jobs you can use service_batch.list_jobs().

5. Clean up¶

If you want to clean up all created assets:

  • experiments
  • trainings
  • pipelines
  • model definitions
  • models
  • functions
  • deployments

please follow this sample notebook.

6. Summary and next steps¶

You successfully completed this notebook!

You learned how to use watsonx.ai to run AutoAI experiments using a Watson Studio project.

Check out our Online Documentation for more samples, tutorials, documentation, how-tos, and blog posts.

Authors¶

Jan Sołtysik, Intern in watsonx.ai

Mateusz Szewczyk, Software Engineer at watsonx.ai.

Copyright © 2020-2025 IBM. This notebook and its source code are released under the terms of the MIT License.